On Adaptive Regularization Methods in Boosting

Authors

  • Mark Culp
  • George Michailidis
  • Kjell Johnson
Abstract

Boosting algorithms build models on dictionaries of learners constructed from the data, where a coefficient in this model relates to the contribution of a particular learner relative to the other learners in the dictionary. Regularization for these models is currently implemented by iteratively applying a simple local tolerance parameter, which scales each coefficient towards zero. Stochastic enhancements, such as bootstrapping, incorporate a random mechanism in the construction of the ensemble to improve robustness, reduce computation time, and improve accuracy. In this paper, we propose a novel local estimation scheme for direct data-driven estimation of regularization parameters in boosting algorithms with stochastic enhancements based on a penalized loss optimization framework. In addition, k-fold cross-validated estimates of this penalty are obtained during its construction. This leads to a computationally fast and effective way of estimating this parameter for boosting algorithms with stochastic enhancements. The procedure is illustrated on both real and synthetic data. The R code used in this manuscript is available as supplemental material.
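The regularization and stochastic enhancements described above are generic to gradient-style boosting. As a minimal sketch (not the authors' estimation scheme; function names and defaults here are invented for illustration), an L2 boosting loop with a fixed global shrinkage parameter `nu` and subsampling might look like:

```python
import numpy as np

def fit_stump(X, r):
    """Best single-feature threshold split minimizing squared error."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            pl, pr = r[left].mean(), r[~left].mean()
            sse = ((r[left] - pl) ** 2).sum() + ((r[~left] - pr) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, (j, t, pl, pr))
    return best[1]

def predict_stump(stump, X):
    j, t, pl, pr = stump
    return np.where(X[:, j] <= t, pl, pr)

def boost(X, y, n_rounds=100, nu=0.1, subsample=0.5, seed=0):
    """Toy L2 gradient boosting: `nu` scales each learner's coefficient
    toward zero; `subsample` is the stochastic enhancement (each learner
    is fit on a random fraction of the data)."""
    rng = np.random.default_rng(seed)
    F = np.full(len(y), y.mean())            # initial constant model
    stumps = []
    for _ in range(n_rounds):
        r = y - F                            # residuals = negative L2 gradient
        idx = rng.choice(len(y), size=int(subsample * len(y)), replace=False)
        s = fit_stump(X[idx], r[idx])
        F += nu * predict_stump(s, X)        # shrink the new coefficient by nu
        stumps.append(s)
    return (y.mean(), nu, stumps)

def predict(model, X):
    base, nu, stumps = model
    return base + nu * sum(predict_stump(s, X) for s in stumps)
```

The paper's contribution is precisely to replace the fixed, globally chosen `nu` above with local, data-driven estimates obtained from a penalized loss framework, with k-fold cross-validated estimates of the penalty computed during construction.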


Similar Resources

Discussion of “Boosting Algorithms: Regularization, Prediction and Model Fitting” by Peter Bühlmann and Torsten Hothorn

We congratulate the authors (hereafter BH) for an interesting take on the boosting technology, and for developing a modular computational environment in R for exploring their models. Their use of low-degree-of-freedom smoothing splines as a base learner provides an interesting approach to adaptive additive modeling. The notion of “Twin Boosting” is interesting as well; besides the adaptive lasso...



Boosting on Manifolds: Adaptive Regularization of Base Classifiers

In this paper we propose to combine two powerful ideas, boosting and manifold learning. On the one hand, we improve ADABOOST by incorporating knowledge on the structure of the data into base classifier design and selection. On the other hand, we use ADABOOST’s efficient learning mechanism to significantly improve supervised and semi-supervised algorithms proposed in the context of manifold lear...
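The snippet above invokes AdaBoost's learning mechanism. For reference, a minimal sketch of the classic AdaBoost.M1 reweighting loop (a textbook version, not the manifold-regularized variant the paper proposes; the weak-learner interface `weak_fit`/`weak_predict` is a placeholder):

```python
import numpy as np

def adaboost(X, y, weak_fit, weak_predict, n_rounds=20):
    """Classic AdaBoost.M1 for labels y in {-1, +1}.

    `weak_fit(X, y, w)` returns a base classifier trained on data
    weighted by w; `weak_predict(h, X)` returns its {-1, +1} outputs.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)                  # example weights
    ensemble = []
    for _ in range(n_rounds):
        h = weak_fit(X, y, w)
        pred = weak_predict(h, X)
        err = w[pred != y].sum()             # weighted error
        if err >= 0.5:                       # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)       # upweight misclassified points
        w /= w.sum()
        ensemble.append((alpha, h))
    return ensemble

def ada_predict(ensemble, X, weak_predict):
    score = sum(a * weak_predict(h, X) for a, h in ensemble)
    return np.sign(score)
```

The paper's idea, per the abstract, is to inject structure learned from the data manifold into how these base classifiers are designed and selected, while keeping this reweighting machinery.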


On the Rate of Convergence of Regularized Boosting Classifiers

A regularized boosting method is introduced, for which regularization is obtained through a penalization function. It is shown through oracle inequalities that this method is model adaptive. The rate of convergence of the probability of misclassification is investigated. It is shown that for quite a large class of distributions, the probability of error converges to the Bayes risk at a rate fas...


Topics in Regularization and Boosting

Regularization is critical for successful statistical modeling of “modern” data, which is high-dimensional, sometimes noisy and often contains a lot of irrelevant predictors. It exists — implicitly or explicitly — at the heart of all successful methods. The two main challenges which we take on in this thesis are understanding its various aspects better and suggesting new regularization approach...




Publication date: 2010